54 research outputs found

    Towards automatic pulmonary nodule management in lung cancer screening with deep learning

The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size, and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance at classifying nodule type that surpasses that of classical machine learning approaches and is within the inter-observer variability among four experienced human observers. Comment: Published in Scientific Reports.
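The multi-stream, multi-scale idea can be pictured as one small 2D CNN per scale, each fed a view of the nodule extracted at that scale, with the per-stream features fused before classification. Below is a minimal illustrative sketch in PyTorch, not the authors' implementation; the layer sizes, number of streams, number of classes, and fusion-by-concatenation are all assumptions.

```python
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One 2D CNN stream; processes a nodule view at a single scale."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # -> (N, 32)

class MultiStreamMultiScaleNet(nn.Module):
    """Fuses features from one stream per scale and classifies nodule type."""
    def __init__(self, n_scales=3, n_classes=6):  # hypothetical sizes
        super().__init__()
        self.streams = nn.ModuleList(Stream() for _ in range(n_scales))
        self.classifier = nn.Linear(32 * n_scales, n_classes)

    def forward(self, views):
        # views: list of (N, 1, H, W) tensors, one view per scale
        fused = torch.cat([s(v) for s, v in zip(self.streams, views)], dim=1)
        return self.classifier(fused)

# Example: three 64x64 views of the same nodule extracted at different scales
net = MultiStreamMultiScaleNet()
views = [torch.randn(2, 1, 64, 64) for _ in range(3)]
logits = net(views)  # shape (2, 6)
```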

    Sediment source fingerprinting: benchmarking recent outputs, remaining challenges and emerging themes

Abstract: Purpose: This review of sediment source fingerprinting assesses the current state of the art, remaining challenges and emerging themes. It combines inputs from international scientists either with track records in the approach or with expertise relevant to progressing the science. Methods: Web of Science and Google Scholar were used to review published papers spanning the period 2013–2019, inclusive, to confirm publication trends in the numbers of papers by study-area country and the types of tracers used. The most recent (2018–2019, inclusive) papers were also benchmarked using a methodological decision-tree published in 2017. Scope: Areas requiring further research and international consensus on methodological detail are reviewed; these comprise spatial variability in tracers and the corresponding sampling implications for end-members; temporal variability in tracers and sampling implications for end-members and target sediment; tracer conservation and knowledge-based pre-selection; the physico-chemical basis for source discrimination; and dissemination of fingerprinting results to stakeholders. Emerging themes are also discussed: novel tracers, concentration-dependence for biomarkers, combining sediment fingerprinting and age-dating, applications to sediment-bound pollutants, incorporation of supportive spatial information to augment discrimination and modelling, aeolian sediment source fingerprinting, integration with process-based models, and development of open-access software tools for data processing. Conclusions: The popularity of sediment source fingerprinting continues on an upward trend globally, but with this growth come issues surrounding lack of standardisation and procedural diversity. Nonetheless, the last two years have also evidenced growing uptake of critical requirements for robust applications, and this review is intended to signpost investigators, both old and new, towards these benchmarks, the remaining research challenges, and the emerging options for different applications of the fingerprinting approach.
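At the core of most fingerprinting applications is an un-mixing step: given tracer concentrations measured in the target sediment and in each candidate source, estimate the source proportions that best reproduce the mixture. The sketch below illustrates one simple variant, a least-squares mixing model with non-negative proportions summing to one; the tracer values and source names are made up for illustration, and this is just one of the many model formulations discussed in the literature.

```python
import numpy as np
from scipy.optimize import minimize

# Rows: candidate sources; columns: tracers (e.g., geochemical concentrations).
# All values here are invented purely for illustration.
sources = np.array([[12.0, 3.1, 0.8],   # e.g., topsoil
                    [ 4.5, 7.9, 2.2],   # e.g., channel bank
                    [ 9.0, 1.2, 5.4]])  # e.g., road runoff
mixture = np.array([8.1, 4.0, 2.6])     # tracer signature of target sediment

def objective(p):
    # Sum of squared relative errors between modelled and measured mixture
    modelled = p @ sources
    return np.sum(((mixture - modelled) / mixture) ** 2)

n = sources.shape[0]
result = minimize(
    objective,
    x0=np.full(n, 1.0 / n),                                  # equal start
    bounds=[(0.0, 1.0)] * n,                                  # 0 <= p_i <= 1
    constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0},  # sum to 1
    method="SLSQP",
)
print("Estimated source proportions:", result.x.round(3))
```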

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    Proceedings of the 3rd Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2015: advancing efficient methodologies through community partnerships and team science

It is well documented that the majority of adults, children and families in need of evidence-based behavioral health interventions do not receive them [1, 2] and that few robust empirically supported methods for implementing evidence-based practices (EBPs) exist. The Society for Implementation Research Collaboration (SIRC) represents a burgeoning effort to advance the innovation and rigor of implementation research and is uniquely focused on bringing together researchers and stakeholders committed to evaluating the implementation of complex evidence-based behavioral health interventions. Through its diverse activities and membership, SIRC aims to foster the promise of implementation research to better serve the behavioral health needs of the population by identifying rigorous, relevant, and efficient strategies that successfully transfer scientific evidence to clinical knowledge for use in real world settings [3]. SIRC began as a National Institute of Mental Health (NIMH)-funded conference series in 2010 (previously titled the “Seattle Implementation Research Conference”; $150,000 USD for 3 conferences in 2011, 2013, and 2015) with the recognition that there were multiple researchers and stakeholders working in parallel on innovative implementation science projects in behavioral health, but that formal channels for communicating and collaborating with one another were relatively unavailable. There was a significant need for a forum within which implementation researchers and stakeholders could learn from one another, refine approaches to science and practice, and develop an implementation research agenda using common measures, methods, and research principles to improve both the frequency and quality with which behavioral health treatment implementation is evaluated. SIRC’s membership growth is a testament to this identified need, with more than 1000 members from 2011 to the present. SIRC’s primary objectives are to: (1) foster communication and collaboration across diverse groups, including implementation researchers, intermediaries, as well as community stakeholders (SIRC uses the term “EBP champions” for these groups) – and to do so across multiple career levels (e.g., students, early career faculty, established investigators); and (2) enhance and disseminate rigorous measures and methodologies for implementing EBPs and evaluating EBP implementation efforts. These objectives are well aligned with Glasgow and colleagues’ [4] five core tenets deemed critical for advancing implementation science: collaboration, efficiency and speed, rigor and relevance, improved capacity, and cumulative knowledge. SIRC advances these objectives and tenets through in-person conferences, which bring together multidisciplinary implementation researchers and those implementing evidence-based behavioral health interventions in the community to share their work and create professional connections and collaborations.

    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterise a health economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published between May 2009 and July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised; 2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model; 3) how/whether threshold effects were explored; 4) how the potential dependency between multiple tests in a pathway was accounted for; and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or hierarchical summary ROC (HSROC) model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results. In 7/22 reports the test was potentially suitable for primary care, but the majority found limited evidence on test accuracy in primary care settings. Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
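For context, the bivariate model referred to in the results jointly pools sensitivity and specificity on the logit scale while modelling their between-study correlation. Its between-study level is usually written as below (the standard textbook form, not extracted from these reports; within-study binomial variability is typically layered underneath):

```latex
\begin{pmatrix} \operatorname{logit}(\mathrm{Se}_i) \\ \operatorname{logit}(\mathrm{Sp}_i) \end{pmatrix}
\sim
\mathcal{N}\!\left(
  \begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},
  \begin{pmatrix}
    \sigma_{\mathrm{Se}}^{2} & \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} \\
    \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} & \sigma_{\mathrm{Sp}}^{2}
  \end{pmatrix}
\right)
```

Here Se_i and Sp_i are study i's sensitivity and specificity, the μ terms are the pooled logit values, and ρ captures the threshold-induced trade-off between sensitivity and specificity across studies.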

    Docker Containers for Deep Learning Experiments

Deep learning is a powerful tool for solving problems in image analysis. The dominant compute platform for deep learning is Nvidia's proprietary CUDA, which can only be used together with Nvidia graphics cards. The nvidia-docker project allows exposing Nvidia graphics cards to Docker containers and thus makes it possible to run deep learning experiments in Docker containers.

In our department, we use deep learning to solve problems in medical image analysis and use Docker containers to offer researchers a unified way to set up experiments on machines with inhomogeneous hardware configurations. By running experiments in Docker containers, researchers can set up their own custom software environments, which often depend on the medical image modality being analyzed. Experiments can be archived in a Docker registry and easily moved between computers. Differences in hardware configurations can be hidden through the system configuration of the base system; this way, container environments remain largely the same even across different computers.

Using graphics hardware from Docker containers, however, also introduces extra complications: CUDA uses C-like code that is compiled to binaries that are not necessarily compatible between graphics cards. Due to the lack of proper hardware virtualization, it is also possible to crash the Nvidia driver on the base system, which will affect all other containers running on the system.

Allowing researchers to define their own runtime environments for their experiments using containers made archiving of experiments more viable. Experiments no longer depend on local system configurations and can therefore be moved between systems and be expected to run. Using graphics hardware from Docker containers introduces more complexity, but generally works and should make deep learning experiments more repeatable in the long run.
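To illustrate the kind of setup described, the sketch below starts a CUDA container with all GPUs exposed using the Docker SDK for Python, the modern equivalent of the nvidia-docker wrapper. The image tag is just an example; any CUDA-enabled image would do.

```python
import docker

client = docker.from_env()

# Request all available GPUs, equivalent to `docker run --gpus all`
gpu_request = docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])

# Run nvidia-smi inside a CUDA base image to verify the GPUs are visible
output = client.containers.run(
    "nvidia/cuda:12.2.0-base-ubuntu22.04",  # example image tag
    "nvidia-smi",
    device_requests=[gpu_request],
    remove=True,  # clean up the container after it exits
)
print(output.decode())
```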

Brock malignancy risk calculator for pulmonary nodules: validation outside a lung cancer screening population

OBJECTIVE: To assess the performance of the Brock malignancy risk model for pulmonary nodules detected in a routine clinical setting. METHODS: In two academic centres in the Netherlands, we established a list of patients aged ≥40 years who received a chest CT scan between 2004 and 2012, resulting in 16 850 and 23 454 eligible subjects. Subsequent diagnosis of lung cancer until the end of 2014 was established through linkage with the National Cancer Registry. A nested case-control study was performed (ratio 1:3). Two observers used semi-automated software to annotate the nodules. The Brock model was validated separately on each data set using ROC analysis and compared with a solely size-based model. RESULTS: After the annotation process, the final analysis included 177 malignant and 695 benign nodules for centre A, and 264 malignant and 710 benign nodules for centre B. The full Brock model resulted in areas under the curve (AUCs) of 0.90 and 0.91, while the size-only model yielded significantly lower AUCs of 0.88 and 0.87, respectively; the negative predictive value of the model was >99%. DISCUSSION: The Brock model shows high predictive discrimination between potentially malignant and benign nodules when validated in an unselected, heterogeneous clinical population. The high NPV may be used to decrease the number of nodule follow-up examinations.
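The validation analysis described here, an ROC comparison of a full risk model against a size-only model plus an NPV at a decision threshold, can be sketched with scikit-learn. The data, coefficients, and 10% threshold below are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: nodule diameter (mm) plus one extra predictor,
# with a binary malignancy label. Purely illustrative.
n = 1000
diameter = rng.gamma(shape=2.0, scale=4.0, size=n)
extra = rng.normal(size=n)                      # e.g., a morphology feature
logit = -4.0 + 0.35 * diameter + 0.8 * extra
malignant = rng.random(n) < 1 / (1 + np.exp(-logit))

X_full = np.column_stack([diameter, extra])
X_size = diameter.reshape(-1, 1)

full = LogisticRegression().fit(X_full, malignant)
size_only = LogisticRegression().fit(X_size, malignant)

p_full = full.predict_proba(X_full)[:, 1]
p_size = size_only.predict_proba(X_size)[:, 1]

print("AUC full model:     ", roc_auc_score(malignant, p_full).round(3))
print("AUC size-only model:", roc_auc_score(malignant, p_size).round(3))

# NPV at an illustrative 10% risk threshold: fraction of nodules scored
# below the threshold that are truly benign
threshold = 0.10
negatives = p_full < threshold
npv = np.mean(~malignant[negatives])
print("NPV below threshold:", npv.round(3))
```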